Lecture Note III: Least-Squares Method

Author

  • Zhiqiang Cai
Abstract

Consider the first-order system

$$L\,U = F \ \text{ in } \Omega, \qquad B\,U = G \ \text{ on } \partial\Omega, \tag{1.1}$$

where $L = (L_{ij})_{m\times n}$ is a block $m\times n$ matrix differential operator of at most first order, $B = (B_{ij})_{l\times n}$ is a block $l\times n$ matrix operator, $U = (U_i)_{n\times 1}$ is the unknown, $F = (F_i)_{m\times 1}$ is a given block vector-valued function defined in $\Omega$, and $G = (G_i)_{l\times 1}$ is a given block vector-valued function defined on $\partial\Omega$. Assume that the first-order system (1.1) has a unique solution $U$. Boundary conditions in a least-squares formulation can be imposed either strongly (in the solution space) or weakly (by adding boundary functionals); for simplicity of presentation, we impose them in the solution space $\Phi$. Assume that $\Phi$ is appropriately chosen so that the least-squares functional is well defined. Define the least-squares functional by
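The extract breaks off before the functional is stated. Assuming the plain $L^2$-norm version (weighted and negative-norm variants are also common in the least-squares literature), the standard definition and the problems it induces would read:

```latex
% Least-squares functional for system (1.1), boundary conditions imposed in Phi.
% Assumed L^2-norm version; the source extract is cut off at this point.
J(V;\,F) \;=\; \|\,L V - F\,\|_{0,\Omega}^{2}, \qquad V \in \Phi .

% The least-squares problem: find U in Phi such that
J(U;\,F) \;=\; \min_{V \in \Phi} J(V;\,F),

% whose Euler--Lagrange (variational) form is: find U in Phi such that
b(U,\,V) \;:=\; (L U,\; L V)_{0,\Omega} \;=\; (F,\; L V)_{0,\Omega}
\qquad \text{for all } V \in \Phi .
```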


Related articles

L1-norm Penalized Least Squares with SALSA

This lecture note describes an iterative optimization algorithm, ‘SALSA’, for solving L1-norm penalized least squares problems. We describe the use of SALSA for sparse signal representation and approximation, especially with overcomplete Parseval transforms. We also illustrate the use of SALSA to perform basis pursuit (BP), basis pursuit denoising (BPD), and morphological component analysis (MC...
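As context for the abstract above, here is a minimal numpy sketch of a SALSA/ADMM-style iteration for the basis pursuit denoising problem min_x ½‖y − Ax‖² + λ‖x‖₁; the function name, parameters, and stopping rule are illustrative, not taken from the note:

```python
import numpy as np

def salsa_bpd(A, y, lam, mu=1.0, n_iter=200):
    """ADMM/SALSA-style iteration for min_x 0.5||y - Ax||^2 + lam*||x||_1.

    Dense sketch: the x-update solves (A^T A + mu I) x = A^T y + mu (z - u);
    soft-thresholding handles the l1 term; u is the scaled dual variable.
    """
    m, n = A.shape
    Aty = A.T @ y
    M = A.T @ A + mu * np.eye(n)       # system matrix; factor once in practice
    x = np.zeros(n)
    z = np.zeros(n)                    # splitting variable (z = x at convergence)
    u = np.zeros(n)                    # scaled dual variable
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Aty + mu * (z - u))   # quadratic subproblem
        z = soft(x + u, lam / mu)                    # l1 proximal step
        u = u + x - z                                # dual update
    return z                                         # sparse iterate

# Tiny usage example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = salsa_bpd(A, y, lam=0.05)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])   # indices of recovered support
```

For the overcomplete Parseval transforms the note emphasizes (A Aᵀ = I), the matrix inversion lemma collapses the x-update to a closed form, (AᵀA + μI)⁻¹ = μ⁻¹(I − AᵀA/(μ+1)), which is what makes SALSA cheap in that setting; the dense solve above is just the generic fallback.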


Multi-View Point Cloud Kernels for Semi-Supervised Learning

In semi-supervised learning (SSL), we learn a predictive model from a collection of labeled data and a typically much larger collection of unlabeled data. These lecture notes present a framework called multi-view point cloud regularization (MVPCR) [5], which unifies and generalizes several semi-supervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbe...
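MVPCR itself combines several views; as a single-view building block, here is a numpy sketch of the data-dependent "point cloud kernel" in the style of Sindhwani, Niyogi, and Belkin that such frameworks generalize, computed at the Gram-matrix level (the graph construction, parameter names, and toy labels are illustrative):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Plain RBF Gram matrix over the point cloud X (labeled + unlabeled)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def point_cloud_gram(K, L, lam_I=1.0):
    """Deform a base Gram matrix K with a graph Laplacian L on the cloud:
        K_tilde = K - K (I + M K)^{-1} M K,   M = lam_I * L,
    so the data-dependent regularizer is folded into the kernel itself.
    """
    n = K.shape[0]
    M = lam_I * L
    return K - K @ np.linalg.solve(np.eye(n) + M @ K, M @ K)

# Usage sketch: dense toy graph on a small cloud, then kernel ridge
# regression on the labeled points using the deformed Gram matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))                    # 20 points, 4 labeled
W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))  # dense similarity graph
L = np.diag(W.sum(1)) - W                           # unnormalized Laplacian
K_tilde = point_cloud_gram(rbf_gram(X), L, lam_I=0.1)

labeled = [0, 1, 2, 3]
y = np.sign(X[labeled, 0])                          # toy labels
Kll = K_tilde[np.ix_(labeled, labeled)]
alpha = np.linalg.solve(Kll + 0.01 * np.eye(len(labeled)), y)
f_all = K_tilde[:, labeled] @ alpha                 # predictions on the cloud
```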


A Kernel for Semi-Supervised Learning With Multi-View Point Cloud Regularization

In semi-supervised learning (SSL), we learn a predictive model from a collection of labeled data and a typically much larger collection of unlabeled data. These lecture notes present a framework called multi-view point cloud regularization (MVPCR) [5], which unifies and generalizes several semi-supervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbe...


Sparse kernel learning with LASSO and Bayesian inference algorithm

Kernelized LASSO (least absolute shrinkage and selection operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In Internationa...
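The cited papers solve this with the kernel path and with Bayesian inference; as a baseline for comparison, a plain proximal-gradient (ISTA) sketch of the underlying kernelized LASSO problem min_a ½‖y − Ka‖² + λ‖a‖₁ looks like this (names and data are illustrative):

```python
import numpy as np

def ista_kernel_lasso(K, y, lam, n_iter=500):
    """ISTA for the kernelized LASSO  min_a 0.5||y - K a||^2 + lam*||a||_1.

    K is the Gram matrix over the training points; the l1 penalty on the
    coefficient vector a selects a sparse set of support training points.
    """
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1/Lipschitz constant of gradient
    a = np.zeros_like(y)
    for _ in range(n_iter):
        grad = K.T @ (K @ a - y)             # gradient of the quadratic term
        v = a - step * grad
        a = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft-threshold
    return a

# Usage: RBF Gram matrix on toy 1-d data, fit, and inspect the sparsity.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.05)
a = ista_kernel_lasso(K, y, lam=0.1)
print((np.abs(a) > 1e-6).sum(), "of", len(a), "coefficients survive")
```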


A Note on an Iterative Least Squares Estimation Method

In this note we suggest a new iterative least squares method for estimating scalar and vector ARMA models. A Monte Carlo study shows that the method has better small sample properties than existing least squares methods and compares favourably with maximum likelihood estimation as well.
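The note's specific iteration is not reproduced in this extract. For context, here is a numpy sketch of the classical Hannan-Rissanen-style scheme such methods build on, for an ARMA(1,1) model: fit a long AR to proxy the unobserved innovations, regress y_t on lagged y and lagged residuals, then refresh the residuals and iterate (a textbook scheme, not necessarily the note's new method):

```python
import numpy as np

def iterative_ls_arma11(y, n_iter=5, p_long=10):
    """Hannan-Rissanen-style iterative least squares for
        y_t = phi * y_{t-1} + theta * e_{t-1} + e_t.
    """
    n = len(y)
    # Stage 1: long-AR regression; its residuals proxy the innovations e_t.
    Xar = np.column_stack([y[p_long - k : n - k] for k in range(1, p_long + 1)])
    coef, *_ = np.linalg.lstsq(Xar, y[p_long:], rcond=None)
    e = np.zeros(n)
    e[p_long:] = y[p_long:] - Xar @ coef
    phi = theta = 0.0
    # Stage 2+: regress y_t on (y_{t-1}, e_{t-1}), refresh residuals, repeat.
    for _ in range(n_iter):
        X = np.column_stack([y[:-1], e[:-1]])
        phi, theta = np.linalg.lstsq(X, y[1:], rcond=None)[0]
        e_new = np.zeros(n)
        for t in range(1, n):
            e_new[t] = y[t] - phi * y[t - 1] - theta * e_new[t - 1]
        e = e_new
    return phi, theta

# Usage: simulate an ARMA(1,1) with phi = 0.6, theta = 0.3 and re-estimate.
rng = np.random.default_rng(0)
e_true = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e_true[t] + 0.3 * e_true[t - 1]
print(iterative_ls_arma11(y))   # estimates near (0.6, 0.3)
```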



Publication date: 2004